
A Modular Architecture Design for Autonomous Driving Racing in Controlled Environments

Fontan-Costas, Brais, Diaz-Cacho, M., Fernandez-Boullon, Ruben, Alonso-Carracedo, Manuel, Perez-Robles, Javier

arXiv.org Artificial Intelligence

Abstract--This paper presents an Autonomous System (AS) architecture for vehicles in a closed circuit. The AS performs precision tasks including computer vision for environment perception, positioning and mapping for accurate localization, path planning for optimal trajectory generation, and control for precise vehicle actuation. Each subsystem operates independently while exchanging data through a cohesive pipeline architecture. The system implements a modular design that combines state-of-the-art technologies for real-time autonomous navigation in controlled environments. Autonomous vehicle systems in controlled environments present significant challenges in integrating multiple subsystems for real-time navigation and decision-making. The development of modular architectures that effectively combine perception, localization, path planning, and control systems represents a critical area of research in autonomous driving technology. This work presents a comprehensive framework for the connectivity and allocation of responsibilities within an autonomous driving architecture, focusing on precise operation in closed-circuit scenarios. The approach defines four primary modules: perception, localization and mapping, trajectory planning, and control.
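The four-module pipeline described in the abstract can be sketched as independent stages wired in sequence. This is a minimal illustrative sketch, not the paper's implementation: the function names, data shapes, and the toy perception/planning/control logic are all assumptions standing in for the real subsystems.

```python
def perceive(frame):
    # Perception: extract landmarks from a raw frame (stubbed as a dict lookup).
    return {"landmarks": frame.get("cones", [])}

def localize(percepts, prior_pose):
    # Localization and mapping: fuse percepts with the previous pose (stubbed).
    return {"pose": prior_pose, "map": percepts["landmarks"]}

def plan(state):
    # Trajectory planning: toy rule, aim at the midpoint of the first two landmarks.
    lm = state["map"]
    if len(lm) >= 2:
        return [((lm[0][0] + lm[1][0]) / 2, (lm[0][1] + lm[1][1]) / 2)]
    return [state["pose"]]

def control(trajectory, pose):
    # Control: proportional steering toward the next waypoint (stubbed).
    tx, ty = trajectory[0]
    px, py = pose
    return {"steer": tx - px, "throttle": 0.5}

def step(frame, pose):
    # Cohesive pipeline: each module consumes only the previous module's output,
    # so any stage can be swapped without touching the others.
    percepts = perceive(frame)
    state = localize(percepts, pose)
    trajectory = plan(state)
    return control(trajectory, state["pose"])
```

The design point is the interface boundaries: because each stage communicates only through its output structure, the modular architecture lets subsystems evolve independently.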


Learning to Compose Domain-Specific Transformations for Data Augmentation

Neural Information Processing Systems

Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model. In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches.
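The core idea of composing user-specified transformation functions can be sketched as follows. This is a simplified illustration, not the paper's method: the sequence policy here is a fixed categorical distribution rather than a generative sequence model trained adversarially, and the transformation functions are toy examples.

```python
import random

# Toy label-preserving transformation functions (illustrative, not from the paper).
def shift(x):  return [v + 1 for v in x]
def scale(x):  return [v * 2 for v in x]
def jitter(x): return [v + random.choice([-1, 0, 1]) for v in x]

TRANSFORMS = [shift, scale, jitter]

def sample_sequence(policy, length, rng):
    # In the paper, this distribution over TF sequences is *learned* with a
    # generative adversarial objective; here it is just a fixed categorical.
    return [rng.choices(range(len(TRANSFORMS)), weights=policy)[0]
            for _ in range(length)]

def augment(x, seq):
    # Apply the sampled transformation functions in order; class labels are
    # preserved because each TF is assumed label-preserving.
    for idx in seq:
        x = TRANSFORMS[idx](x)
    return x
```

Note that the composed result depends on order (shift-then-scale differs from scale-then-shift), which is exactly why the paper learns a sequence model rather than sampling each transformation independently.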



Hypothesis Transfer Learning via Transformation Functions

Neural Information Processing Systems

We consider the Hypothesis Transfer Learning (HTL) problem where one incorporates a hypothesis trained on the source domain into the learning procedure of the target domain. Existing theoretical analysis either only studies specific algorithms or only presents upper bounds on the generalization error but not on the excess risk. In this paper, we propose a unified algorithm-dependent framework for HTL through a novel notion of transformation functions, which characterizes the relation between the source and the target domains. We conduct a general risk analysis of this framework and in particular, we show for the first time, if two domains are related, HTL enjoys faster convergence rates of excess risks for Kernel Smoothing and Kernel Ridge Regression than those of the classical non-transfer learning settings. We accompany this framework with an analysis of cross-validation for HTL to search for the best transfer technique and gracefully reduce to non-transfer learning when HTL is not helpful. Experiments on robotics and neural imaging data demonstrate the effectiveness of our framework.
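One concrete instance of a transformation function is the additive ("offset") case, where the target predictor is the source hypothesis plus a smoothed estimate of the target residuals. The sketch below assumes this additive form with a Nadaraya-Watson kernel smoother; the source hypothesis, bandwidth, and data are illustrative placeholders, not the paper's setup.

```python
import math

def h_source(x):
    # Pretend this hypothesis was trained on the source domain.
    return 2.0 * x

def kernel_smooth(x, xs, rs, h=0.5):
    # Nadaraya-Watson kernel smoothing of the residuals rs observed at xs.
    w = [math.exp(-((x - xi) / h) ** 2) for xi in xs]
    s = sum(w)
    return sum(wi * ri for wi, ri in zip(w, rs)) / s if s > 0 else 0.0

def h_target(x, xs, ys):
    # HTL with an additive transformation function: fit kernel smoothing to
    # the residuals y - h_source(x) on the (small) target sample, then add
    # the source hypothesis back.
    rs = [y - h_source(xi) for xi, y in zip(xs, ys)]
    return h_source(x) + kernel_smooth(x, xs, rs)
```

When the domains are related (residuals are smooth and small), the smoother only has to estimate the residual function, which is the intuition behind the faster excess-risk rates shown in the paper.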


Supplementary Material A More on Related Work

Neural Information Processing Systems

All of those works address in-distribution fairness, whereas we investigate out-of-distribution fairness in this paper. In many real-world applications, distribution shifts are unavoidable. Thus the conditional distribution of the model's prediction is also preserved, as formalized in Proposition B.2 (Transfer of fairness under subpopulation shift of sensitive attribute). The proof is similar to the previous one. It suggests that encouraging fairness can alleviate spurious correlation.



Supplement to Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding 1 Additional Algorithm Details 1.1 Details of the Transformation Function

Neural Information Processing Systems

The support nodes are either positive or negative. For the transformation function, we stack multiple computation blocks as shown in Figure 1. The stacking mechanism helps the function capture richer relationships between nodes, which boosts performance. Each computation block consists mainly of two modules. The detailed architecture of the self-attention module is illustrated in Figure 1.
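The stacked-block structure described above can be sketched as follows. This is an illustrative skeleton, not the paper's architecture: the supplement names only the self-attention module, so the second module here is a stand-in elementwise nonlinearity, and the stacking depth and embedding sizes are arbitrary.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def self_attention(nodes):
    # Dot-product self-attention over node embeddings, with queries, keys,
    # and values all equal to the embeddings (no learned projections here).
    out = []
    for q in nodes:
        scores = softmax([sum(a * b for a, b in zip(q, k)) for k in nodes])
        out.append([sum(w * v[d] for w, v in zip(scores, nodes))
                    for d in range(len(q))])
    return out

def second_module(nodes):
    # Stand-in for the block's unspecified second module (elementwise ReLU).
    return [[max(0.0, v) for v in n] for n in nodes]

def computation_block(nodes):
    return second_module(self_attention(nodes))

def transformation_function(nodes, depth=3):
    # Stacking multiple computation blocks lets the transformation function
    # mix information across all support nodes repeatedly.
    for _ in range(depth):
        nodes = computation_block(nodes)
    return nodes
```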